This tutorial illustrates the core visualization utilities available in Ax.
import numpy as np
from ax.service.ax_client import AxClient
from ax.modelbridge.cross_validation import cross_validate
from ax.plot.contour import interact_contour
from ax.plot.diagnostic import interact_cross_validation
from ax.plot.scatter import (
    interact_fitted,
    plot_objective_vs_constraints,
    tile_fitted,
)
from ax.plot.slice import plot_slice
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render, init_notebook_plotting
init_notebook_plotting()
[INFO 08-01 05:39:01] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
The visualizations require an experiment object and a model fit on the evaluated data. The routine below is a copy of the Service API tutorial, so the explanation here is omitted. Retrieving the experiment and model objects for each API paradigm is shown in the respective tutorials.
noise_sd = 0.1
param_names = [f"x{i+1}" for i in range(6)] # x1, x2, ..., x6
def noisy_hartmann_evaluation_function(parameterization):
    x = np.array([parameterization.get(p_name) for p_name in param_names])
    noise1, noise2 = np.random.normal(0, noise_sd, 2)
    return {
        "hartmann6": (hartmann6(x) + noise1, noise_sd),
        "l2norm": (np.sqrt((x**2).sum()) + noise2, noise_sd),
    }
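For readers who want to see exactly what is being optimized, the standard Hartmann6 test function can be written out directly in NumPy. This is a sketch of the textbook definition (Ax ships an equivalent implementation as `ax.utils.measurement.synthetic_functions.hartmann6`, which the tutorial uses); the constants below are the standard published ones:

```python
import numpy as np

# Standard Hartmann6 constants.
ALPHA = np.array([1.0, 1.2, 3.0, 3.2])
A = np.array([
    [10.0, 3.0, 17.0, 3.5, 1.7, 8.0],
    [0.05, 10.0, 17.0, 0.1, 8.0, 14.0],
    [3.0, 3.5, 1.7, 10.0, 17.0, 8.0],
    [17.0, 8.0, 0.05, 10.0, 0.1, 14.0],
])
P = 1e-4 * np.array([
    [1312, 1696, 5569, 124, 8283, 5886],
    [2329, 4135, 8307, 3736, 1004, 9991],
    [2348, 1451, 3522, 2883, 3047, 6650],
    [4047, 8828, 8732, 5743, 1091, 381],
])


def hartmann6_reference(x: np.ndarray) -> float:
    """Hartmann6 on [0, 1]^6; global minimum is approximately -3.32237."""
    inner = np.sum(A * (x - P) ** 2, axis=1)  # shape (4,)
    return float(-np.sum(ALPHA * np.exp(-inner)))


# Known approximate global minimizer, for reference:
x_star = np.array([0.20169, 0.150011, 0.476874, 0.275332, 0.311652, 0.6573])
```

Evaluating `hartmann6_reference(x_star)` should give a value close to the known optimum of about -3.32237, which is why the best trial values seen later in the logs (around -1.4) still leave room for improvement.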
ax_client = AxClient()
ax_client.create_experiment(
    name="test_visualizations",
    parameters=[
        {
            "name": p_name,
            "type": "range",
            "bounds": [0.0, 1.0],
        }
        for p_name in param_names
    ],
    objective_name="hartmann6",
    minimize=True,
    outcome_constraints=["l2norm <= 1.25"],
)
[INFO 08-01 05:39:02] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 6 decimal points.
[INFO 08-01 05:39:02] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x1. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 08-01 05:39:02] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x2. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 08-01 05:39:02] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x3. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 08-01 05:39:02] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x4. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 08-01 05:39:02] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x5. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 08-01 05:39:02] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x6. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 08-01 05:39:02] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x6', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[]).
[INFO 08-01 05:39:02] ax.modelbridge.dispatch_utils: Using Bayesian optimization since there are more ordered parameters than there are categories for the unordered categorical parameters.
[INFO 08-01 05:39:02] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 12 trials, GPEI for subsequent trials]). Iterations after 12 will take longer to generate due to model-fitting.
for i in range(20):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to external system.
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data=noisy_hartmann_evaluation_function(parameters),
    )
[INFO 08-01 05:39:02] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.155887, 'x2': 0.769762, 'x3': 0.541296, 'x4': 0.38674, 'x5': 0.968768, 'x6': 0.521329}.
[INFO 08-01 05:39:02] ax.service.ax_client: Completed trial 0 with data: {'hartmann6': (-0.155133, 0.1), 'l2norm': (1.607272, 0.1)}.
[INFO 08-01 05:39:02] ax.service.ax_client: Generated new trial 1 with parameters {'x1': 0.106745, 'x2': 0.635209, 'x3': 0.004259, 'x4': 0.654549, 'x5': 0.042576, 'x6': 0.914425}.
[INFO 08-01 05:39:02] ax.service.ax_client: Completed trial 1 with data: {'hartmann6': (-0.077308, 0.1), 'l2norm': (1.423886, 0.1)}.
[INFO 08-01 05:39:02] ax.service.ax_client: Generated new trial 2 with parameters {'x1': 0.488233, 'x2': 0.915126, 'x3': 0.828658, 'x4': 0.300647, 'x5': 0.247965, 'x6': 0.089467}.
[INFO 08-01 05:39:02] ax.service.ax_client: Completed trial 2 with data: {'hartmann6': (-1.289151, 0.1), 'l2norm': (1.361104, 0.1)}.
[INFO 08-01 05:39:02] ax.service.ax_client: Generated new trial 3 with parameters {'x1': 0.176696, 'x2': 0.020189, 'x3': 0.292348, 'x4': 0.817692, 'x5': 0.669076, 'x6': 0.082727}.
[INFO 08-01 05:39:02] ax.service.ax_client: Completed trial 3 with data: {'hartmann6': (-0.017423, 0.1), 'l2norm': (1.240026, 0.1)}.
[INFO 08-01 05:39:02] ax.service.ax_client: Generated new trial 4 with parameters {'x1': 0.270802, 'x2': 0.831095, 'x3': 0.437319, 'x4': 0.99506, 'x5': 0.818764, 'x6': 0.197312}.
[INFO 08-01 05:39:02] ax.service.ax_client: Completed trial 4 with data: {'hartmann6': (-0.291312, 0.1), 'l2norm': (1.73855, 0.1)}.
[INFO 08-01 05:39:02] ax.service.ax_client: Generated new trial 5 with parameters {'x1': 0.343156, 'x2': 0.198907, 'x3': 0.253663, 'x4': 0.325119, 'x5': 0.587147, 'x6': 0.842011}.
[INFO 08-01 05:39:02] ax.service.ax_client: Completed trial 5 with data: {'hartmann6': (-0.513726, 0.1), 'l2norm': (1.228612, 0.1)}.
[INFO 08-01 05:39:02] ax.service.ax_client: Generated new trial 6 with parameters {'x1': 0.580164, 'x2': 0.996117, 'x3': 0.406833, 'x4': 0.471528, 'x5': 0.979816, 'x6': 0.476224}.
[INFO 08-01 05:39:02] ax.service.ax_client: Completed trial 6 with data: {'hartmann6': (-0.122265, 0.1), 'l2norm': (1.717025, 0.1)}.
[INFO 08-01 05:39:02] ax.service.ax_client: Generated new trial 7 with parameters {'x1': 0.195021, 'x2': 0.639254, 'x3': 0.56095, 'x4': 0.91091, 'x5': 0.1363, 'x6': 0.163947}.
[INFO 08-01 05:39:02] ax.service.ax_client: Completed trial 7 with data: {'hartmann6': (-0.10995, 0.1), 'l2norm': (1.413087, 0.1)}.
[INFO 08-01 05:39:02] ax.service.ax_client: Generated new trial 8 with parameters {'x1': 0.459418, 'x2': 0.225962, 'x3': 0.40857, 'x4': 0.618237, 'x5': 0.445713, 'x6': 0.18851}.
[INFO 08-01 05:39:02] ax.service.ax_client: Completed trial 8 with data: {'hartmann6': (-0.140993, 0.1), 'l2norm': (1.033309, 0.1)}.
[INFO 08-01 05:39:02] ax.service.ax_client: Generated new trial 9 with parameters {'x1': 0.886185, 'x2': 0.168718, 'x3': 0.389569, 'x4': 0.490322, 'x5': 0.889023, 'x6': 0.232734}.
[INFO 08-01 05:39:02] ax.service.ax_client: Completed trial 9 with data: {'hartmann6': (-0.064516, 0.1), 'l2norm': (1.626755, 0.1)}.
[INFO 08-01 05:39:02] ax.service.ax_client: Generated new trial 10 with parameters {'x1': 0.033755, 'x2': 0.243473, 'x3': 0.956301, 'x4': 0.467895, 'x5': 0.562997, 'x6': 0.966005}.
[INFO 08-01 05:39:02] ax.service.ax_client: Completed trial 10 with data: {'hartmann6': (-0.226579, 0.1), 'l2norm': (1.648584, 0.1)}.
[INFO 08-01 05:39:02] ax.service.ax_client: Generated new trial 11 with parameters {'x1': 0.814331, 'x2': 0.916459, 'x3': 0.682157, 'x4': 0.870823, 'x5': 0.018916, 'x6': 0.186797}.
[INFO 08-01 05:39:02] ax.service.ax_client: Completed trial 11 with data: {'hartmann6': (0.039689, 0.1), 'l2norm': (1.743077, 0.1)}.
[INFO 08-01 05:39:24] ax.service.ax_client: Generated new trial 12 with parameters {'x1': 0.389363, 'x2': 0.745403, 'x3': 0.804597, 'x4': 0.286468, 'x5': 0.253401, 'x6': 0.081842}.
[INFO 08-01 05:39:24] ax.service.ax_client: Completed trial 12 with data: {'hartmann6': (-1.390588, 0.1), 'l2norm': (1.191586, 0.1)}.
[INFO 08-01 05:39:43] ax.service.ax_client: Generated new trial 13 with parameters {'x1': 0.437229, 'x2': 0.585082, 'x3': 0.727994, 'x4': 0.292874, 'x5': 0.304947, 'x6': 0.09244}.
[INFO 08-01 05:39:43] ax.service.ax_client: Completed trial 13 with data: {'hartmann6': (-0.694049, 0.1), 'l2norm': (1.312636, 0.1)}.
[INFO 08-01 05:40:02] ax.service.ax_client: Generated new trial 14 with parameters {'x1': 0.272244, 'x2': 0.784335, 'x3': 0.828079, 'x4': 0.277645, 'x5': 0.232018, 'x6': 0.073354}.
[INFO 08-01 05:40:02] ax.service.ax_client: Completed trial 14 with data: {'hartmann6': (-0.877103, 0.1), 'l2norm': (1.452179, 0.1)}.
[INFO 08-01 05:40:36] ax.service.ax_client: Generated new trial 15 with parameters {'x1': 0.453477, 'x2': 0.808182, 'x3': 0.809349, 'x4': 0.285014, 'x5': 0.354633, 'x6': 0.086063}.
[INFO 08-01 05:40:36] ax.service.ax_client: Completed trial 15 with data: {'hartmann6': (-1.260823, 0.1), 'l2norm': (1.416095, 0.1)}.
[INFO 08-01 05:41:12] ax.service.ax_client: Generated new trial 16 with parameters {'x1': 0.442642, 'x2': 0.726895, 'x3': 0.789277, 'x4': 0.290391, 'x5': 0.155806, 'x6': 0.086576}.
[INFO 08-01 05:41:12] ax.service.ax_client: Completed trial 16 with data: {'hartmann6': (-1.202536, 0.1), 'l2norm': (1.201796, 0.1)}.
[INFO 08-01 05:42:11] ax.service.ax_client: Generated new trial 17 with parameters {'x1': 0.436152, 'x2': 0.762074, 'x3': 0.888799, 'x4': 0.2804, 'x5': 0.216349, 'x6': 0.029384}.
[INFO 08-01 05:42:11] ax.service.ax_client: Completed trial 17 with data: {'hartmann6': (-1.319296, 0.1), 'l2norm': (1.345403, 0.1)}.
[INFO 08-01 05:43:06] ax.service.ax_client: Generated new trial 18 with parameters {'x1': 0.419898, 'x2': 0.768745, 'x3': 0.679594, 'x4': 0.235524, 'x5': 0.185111, 'x6': 0.023188}.
[INFO 08-01 05:43:06] ax.service.ax_client: Completed trial 18 with data: {'hartmann6': (-0.758758, 0.1), 'l2norm': (1.124569, 0.1)}.
[INFO 08-01 05:43:47] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.445078, 'x2': 0.77764, 'x3': 0.803208, 'x4': 0.226002, 'x5': 0.162322, 'x6': 0.011447}.
[INFO 08-01 05:43:47] ax.service.ax_client: Completed trial 19 with data: {'hartmann6': (-0.808256, 0.1), 'l2norm': (1.195499, 0.1)}.
The plot below shows the response surface for the hartmann6 metric as a function of the x1 and x2 parameters.
The other parameters are fixed at the middle of their respective ranges, which in this example is 0.5 for all of them.
# This could alternatively be done with `ax.plot.contour.plot_contour`.
render(ax_client.get_contour_plot(param_x="x1", param_y="x2", metric_name='hartmann6'))
[INFO 08-01 05:43:47] ax.service.ax_client: Retrieving contour plot with parameter 'x1' on X-axis and 'x2' on Y-axis, for metric 'hartmann6'. Remaining parameters are affixed to the middle of their range.
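The "remaining parameters are affixed to the middle of their range" mechanic can be sketched without Ax at all: evaluate any 6-dimensional function over a grid in x1 and x2 while holding x3 through x6 at 0.5. The toy quadratic below is a hypothetical stand-in for the model's predicted surface, not the Ax model:

```python
import numpy as np


def toy_surface(x: np.ndarray) -> float:
    # Hypothetical stand-in for a 6-d metric; not the Ax model's prediction.
    return float(np.sum((x - 0.3) ** 2))


grid = np.linspace(0.0, 1.0, 25)
x1g, x2g = np.meshgrid(grid, grid)
z = np.empty_like(x1g)
for i in range(x1g.shape[0]):
    for j in range(x1g.shape[1]):
        point = np.full(6, 0.5)  # x3..x6 fixed at the middle of their range
        point[0], point[1] = x1g[i, j], x2g[i, j]
        z[i, j] = toy_surface(point)
# z can now be fed to any contour plotter (e.g. Plotly's go.Contour).
```

The resulting `z` grid is exactly the kind of 2-d cross-section the contour plot above visualizes, just computed from a toy function instead of the fitted model.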
The plot below allows toggling between different pairs of parameters to view the contours.
model = ax_client.generation_strategy.model
render(interact_contour(model=model, metric_name='hartmann6'))
This plot illustrates the tradeoffs achievable between two different metrics. The plot takes the x-axis metric as input (usually the objective) and allows toggling among all other metrics for the y-axis.
This is useful for getting a sense of the Pareto frontier (i.e., the best objective value achievable for different bounds on the constraint).
render(plot_objective_vs_constraints(model, 'hartmann6', rel=False))
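The tradeoff the plot visualizes can also be computed directly from observed (objective, constraint) pairs: sort by the constraint value and take a running minimum of the objective, giving the best objective achievable under each constraint bound. This is a sketch on made-up numbers, not Ax's implementation:

```python
def best_objective_per_bound(points):
    """points: iterable of (objective, constraint), objective minimized.

    Returns (constraint_bound, best_objective) pairs: for each observed
    constraint value, the best objective among points with constraint <= it.
    """
    frontier = []
    best = float("inf")
    for obj, con in sorted(points, key=lambda p: p[1]):
        best = min(best, obj)
        frontier.append((con, best))
    return frontier


# Hypothetical (hartmann6, l2norm) observations showing a tradeoff:
observations = [(-0.5, 1.0), (-1.39, 1.35), (-1.2, 1.2), (-0.9, 1.1)]
frontier = best_objective_per_bound(observations)
```

Reading the result: if the l2norm constraint bound were tightened to 1.1, the best hartmann6 value achievable among these observations would be -0.9; relaxing it to 1.35 allows -1.39.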
Cross-validation (CV) plots are useful for checking how well the model's out-of-sample predictions calibrate against the actual measurements. If all points are close to the dashed line, the model is a good predictor of the real data.
cv_results = cross_validate(model)
render(interact_cross_validation(cv_results))
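"Close to the dashed line" can be summarized numerically by scoring the held-out predictions against the observations, e.g. with the coefficient of determination R^2. This is a hand-rolled sketch on hypothetical numbers; Ax also provides its own numerical CV diagnostics:

```python
import numpy as np


def r_squared(observed, predicted) -> float:
    """R^2 of held-out predictions vs. observations (1.0 = perfect fit)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)


# Hypothetical leave-one-out predictions for a well-calibrated model:
obs = [-0.16, -1.29, -0.51, -1.39, -0.88]
pred = [-0.20, -1.25, -0.55, -1.30, -0.90]
```

An R^2 near 1 corresponds to points hugging the diagonal in the CV plot; values near 0 (or negative) mean the model predicts no better than the mean of the data.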
Slice plots show the metric outcome as a function of one parameter while fixing the others. They serve a similar function to contour plots.
render(plot_slice(model, "x2", "hartmann6"))
Fitted-value (tile) plots are useful for viewing the model's predicted effect of each arm.
render(interact_fitted(model, rel=False))
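The per-arm view can also be reasoned about numerically: given each arm's predicted mean and SEM, a simple z-test against the best arm flags which arms are statistically indistinguishable from it. The arm names and values below are hypothetical, loosely echoing the trials in the logs above:

```python
import math


def arms_close_to_best(means, sems, z_threshold=1.96):
    """Arms whose predicted mean is within a ~95% z-interval of the best arm.

    means, sems: dicts mapping arm name -> predicted mean / SEM.
    The objective is assumed to be minimized.
    """
    best_arm = min(means, key=means.get)
    close = []
    for arm in means:
        diff = means[arm] - means[best_arm]
        se = math.sqrt(sems[arm] ** 2 + sems[best_arm] ** 2)
        if se == 0 or diff / se < z_threshold:
            close.append(arm)
    return best_arm, close


means = {"0_0": -0.16, "12_0": -1.39, "17_0": -1.32, "15_0": -1.26}
sems = {arm: 0.1 for arm in means}
```

With predictions this noisy, arms 12_0, 17_0, and 15_0 are statistically indistinguishable from one another, which is exactly the kind of judgment the fitted-value plot supports visually via its error bars.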
Total runtime of script: 5 minutes, 4.08 seconds.